
    Sensitivity Analysis for Mirror-Stratifiable Convex Functions

    This paper provides a set of sensitivity analysis and activity identification results for a class of convex functions with a strong geometric structure, which we coin "mirror-stratifiable". These functions are such that there is a bijection between a primal and a dual stratification of the space into partitioning sets, called strata. This pairing is crucial to track the strata that are identifiable by solutions of parametrized optimization problems or by iterates of optimization algorithms. This class of functions encompasses all regularizers routinely used in signal and image processing, machine learning, and statistics. We show that this mirror-stratifiable structure enjoys a nice sensitivity theory, allowing us to study the stability of solutions of optimization problems under small perturbations, as well as activity identification of first-order proximal splitting-type algorithms. Existing results in the literature typically assume that, under a non-degeneracy condition, the active set associated with a minimizer is stable under small perturbations and is identified in finite time by optimization schemes. In contrast, our results do not require any non-degeneracy assumption: as a consequence, the optimal active set is no longer necessarily stable, but we are able to track precisely the set of identifiable strata. We show that these results have crucial implications when solving challenging ill-posed inverse problems via regularization, a typical scenario where the non-degeneracy condition is not fulfilled. Our theoretical results, illustrated by numerical simulations, allow us to characterize the instability behaviour of the regularized solutions by locating the set of all low-dimensional strata that can potentially be identified by these solutions.
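    The ℓ1-norm is a canonical instance of such a regularizer: its strata are the sign patterns (supports) of the primal variable. The sketch below is not taken from the paper; the problem data, step size, and stratum-tracking helper are illustrative assumptions. It only shows how the stratum visited by proximal-gradient iterates on a toy Lasso problem can be monitored along the iterations.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximity operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def stratum(x):
    """Sign pattern of x: the stratum of the l1-norm containing x."""
    return tuple(np.sign(x).astype(int))

# Toy Lasso instance: min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))
b = rng.standard_normal(20)
lam = 2.0
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient

x = np.zeros(8)
for it in range(200):
    grad = A.T @ (A @ x - b)
    x_new = soft_threshold(x - step * grad, step * lam)
    if stratum(x_new) != stratum(x):
        print(f"iteration {it}: stratum changed to {stratum(x_new)}")
    x = x_new
# once the sign pattern stops changing, the iterates have identified a stratum
```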

    Prices stabilization for inexact unit-commitment problems

    A widespread and successful approach to tackling unit-commitment problems is constraint decomposition: by dualizing the linking constraints, the large-scale nonconvex problem decomposes into smaller independent subproblems. The dual problem then consists in finding the best Lagrangian multiplier (the optimal "price"); it is solved by a convex nonsmooth optimization method. Realistic modeling of technical production constraints makes the subproblems themselves difficult to solve exactly. Nonsmooth optimization algorithms can cope with inexact solutions of the subproblems. In this case, however, we observe that the computed dual solutions show a noisy and unstable behaviour, which could prevent their use as price indicators. In this paper, we present a simple and easy-to-implement way to stabilize dual optimal solutions, by penalizing the noisy behaviour of the prices in the dual objective. After studying the impact of a general stabilization term on the model and the resolution scheme, we focus on the penalization by discrete total variation, showing the consistency of the approach. We illustrate our stabilization on a synthetic example and on real-life problems from EDF (the French Electricity Board).
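    As a rough illustration only (not the paper's model: the toy dual function, the regularization weight, and the use of cvxpy are placeholder assumptions), stabilization by discrete total variation amounts to subtracting from the concave dual objective a term penalizing jumps between consecutive time-step prices before maximizing it.

```python
import numpy as np
import cvxpy as cp

T = 24           # time horizon (prices lambda_1 .. lambda_T)
mu = 0.5         # weight of the total-variation stabilization

# Toy concave dual: theta(lam) = min_j (c_j + G[j] @ lam), built from a few
# synthetic cuts; in practice these come from the unit-commitment subproblems.
rng = np.random.default_rng(1)
G = rng.standard_normal((10, T))
c = rng.standard_normal(10)

lam = cp.Variable(T)
theta = cp.min(G @ lam + c)                    # concave piecewise-linear model
tv = cp.sum(cp.abs(cp.diff(lam)))              # discrete total variation of prices
problem = cp.Problem(cp.Maximize(theta - mu * tv))
problem.solve()
print("stabilized prices:", np.round(lam.value, 2))
```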

    On the bridge between combinatorial optimization and nonlinear optimization: a family of semidefinite bounds for 0-1 quadratic problems leading to quasi-Newton methods

    This article presents a family of semidefinite programming bounds, obtained by Lagrangian duality, for 0-1 quadratic optimization problems with linear or quadratic constraints. These bounds have useful computational properties: they offer a good ratio of tightness to computing time, they can be optimized by a quasi-Newton method, and their final tightness level is controlled by a real parameter. These properties are illustrated on three standard combinatorial optimization problems: unconstrained 0-1 quadratic optimization, heaviest k-subgraph, and graph bisection.
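    For intuition only, here is a toy sketch of the classical Lagrangian-dual bound for unconstrained 0-1 quadratic minimization, not the paper's specific family of bounds; the random instance, the positive-definiteness safeguard, and the use of SciPy's L-BFGS-B are assumptions. Dualizing the constraints x_i^2 = x_i yields a concave bound that a quasi-Newton method can maximize.

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance: min x^T Q x over x in {0,1}^n
rng = np.random.default_rng(2)
n = 8
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2

def neg_dual(lam):
    """Negated dual bound theta(lam) = min_x x^T (Q + Diag(lam)) x - lam^T x,
    finite when Q + Diag(lam) is positive definite."""
    A = Q + np.diag(lam)
    if np.linalg.eigvalsh(A).min() <= 1e-8:    # outside the domain: return a large value
        return 1e6, np.zeros_like(lam)
    x = 0.5 * np.linalg.solve(A, lam)          # inner minimizer
    theta = x @ A @ x - lam @ x                # = -(1/4) lam^T A^{-1} lam
    grad = x ** 2 - x                          # d theta / d lam_i (Danskin's theorem)
    return -theta, -grad

lam0 = np.full(n, 2.0 * np.abs(np.linalg.eigvalsh(Q)).max())  # start inside the domain
res = minimize(neg_dual, lam0, jac=True, method="L-BFGS-B")
print("lower bound on the 0-1 problem:", -res.fun)
```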

    Quadratic stabilization of Benders decomposition

    The foundational Benders decomposition, or variable decomposition, is known to suffer from the inherent instability of cutting-plane-based methods. Several techniques have been proposed to improve this method, which has become the state of the art for important problems in operations research. This paper presents a complementary improvement featuring a quadratic stabilization of the Benders cutting-plane model. Inspired by the level bundle methods of nonsmooth optimization, this algorithmic improvement is designed to reduce the number of iterations of the method. We illustrate the interest of the stabilization on two classical problems: network design problems and hub location problems. We also prove that the stabilized Benders method has the same theoretical convergence properties as the usual Benders method.
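    As a generic illustration of the underlying idea (a proximal, quadratically stabilized cutting-plane loop on a toy convex function, not the paper's Benders master; the test function, proximal weight, and descent rule are assumptions), each iteration minimizes the cutting-plane model plus a quadratic term keeping the next trial point close to the current stability center.

```python
import numpy as np
import cvxpy as cp

# Toy nonsmooth convex function to minimize: f(y) = max_i (a_i . y - b_i)
rng = np.random.default_rng(3)
A = rng.standard_normal((15, 5)); b = rng.standard_normal(15)

def oracle(y):
    """Value and a subgradient of f at y."""
    vals = A @ y - b
    i = int(np.argmax(vals))
    return float(vals[i]), A[i]

center = np.zeros(5)                 # stability center
t = 1.0                              # weight of the quadratic stabilization
cuts = [(*oracle(center), center.copy())]

for _ in range(30):
    y = cp.Variable(5)
    model = cp.max(cp.hstack([fv + gv @ (y - yv) for fv, gv, yv in cuts]))
    cp.Problem(cp.Minimize(model + (0.5 / t) * cp.sum_squares(y - center))).solve()
    y_new = y.value
    f_new, g_new = oracle(y_new)
    cuts.append((f_new, g_new, y_new))
    if f_new < oracle(center)[0] - 1e-9:     # descent step: move the stability center
        center = y_new

print("best value found:", oracle(center)[0])
```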

    Nonsmoothness in Machine Learning: specific structure, proximal identification, and applications

    Nonsmoothness is often a curse for optimization, but it is sometimes a blessing, in particular for applications in machine learning. In this paper, we present the specific structure of nonsmooth optimization problems appearing in machine learning and illustrate how to leverage this structure in practice, for compression, acceleration, or dimension reduction. We pay special attention to the presentation, keeping it concise and easily accessible, with both simple examples and general results.

    Spectral (Isotropic) Manifolds and Their Dimension

    A set of symmetric matrices whose ordered vector of eigenvalues belongs to a fixed set in R^n is called spectral or isotropic. In this paper, we establish that every locally symmetric C^k submanifold M of R^n gives rise to a C^k spectral manifold, for k ∈ {2, 3, …, ∞, ω}. An explicit formula for the dimension of the spectral manifold, in terms of the dimension and the intrinsic properties of M, is derived.
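    A simple worked example (illustrative only, not taken from the paper): taking for M the diagonal line of R^n, which is a locally symmetric analytic submanifold, the associated spectral set is the one-dimensional manifold of multiples of the identity matrix.

```latex
% Write \lambda(X) \in \mathbf{R}^n for the ordered vector of eigenvalues of X \in S^n.
\[
  M = \{\, x \in \mathbf{R}^n : x_1 = x_2 = \dots = x_n \,\}
  \quad\Longrightarrow\quad
  \lambda^{-1}(M) = \{\, X \in S^n : \lambda(X) \in M \,\}
                  = \{\, c\, I_n : c \in \mathbf{R} \,\},
\]
% a smooth (in fact analytic) spectral manifold whose dimension, like that of M, equals 1.
```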

    On the structure of locally symmetric manifolds

    This paper studies structural properties of locally symmetric submanifolds. One of the main results states that a locally symmetric submanifold M of R^n admits a locally symmetric tangential parametrization in an appropriately reduced ambient space. This property is of independent interest and is the key element in establishing that the spectral set, consisting of all symmetric matrices having their eigenvalues on M, is a smooth submanifold of the space of symmetric matrices S^n.

    Newton acceleration on manifolds identified by proximal-gradient methods

    Proximal methods are known to identify the underlying substructure of nonsmooth optimization problems. Even more, in many interesting situations, the output of a proximity operator comes with its structure at no additional cost, and convergence is improved once it matches the structure of a minimizer. However, it is impossible in general to know whether the current structure is final or not; such highly valuable information has to be exploited adaptively. To do so, we place ourselves in the case where a proximal gradient method can identify manifolds of differentiability of the nonsmooth objective. Leveraging this manifold identification, we show that Riemannian Newton-like methods can be intertwined with the proximal gradient steps to drastically boost the convergence. We prove the superlinear convergence of the algorithm when solving some nondegenerate nonsmooth nonconvex optimization problems. We provide numerical illustrations on optimization problems regularized by the ℓ1-norm or the trace norm.
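    A minimal sketch of the idea in the ℓ1 case, on a convex Lasso toy problem rather than the paper's nonconvex setting (the data, the alternation schedule, and the acceptance rule are assumptions): a proximal-gradient step identifies a support and sign pattern, and a Newton step is then taken on the corresponding manifold, where the objective is smooth.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 10)); b = rng.standard_normal(30)
lam = 1.0
step = 1.0 / np.linalg.norm(A, 2) ** 2

def prox_grad(x):
    """One forward-backward step for 0.5*||Ax-b||^2 + lam*||x||_1."""
    z = x - step * A.T @ (A @ x - b)
    return np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

x = np.zeros(10)
for it in range(50):
    x = prox_grad(x)                          # identification step
    S = np.flatnonzero(x)                     # identified support (the manifold)
    if S.size == 0:
        continue
    s = np.sign(x[S])
    AS = A[:, S]
    grad = AS.T @ (AS @ x[S] - b) + lam * s   # gradient of the smooth restriction
    hess = AS.T @ AS
    d = -np.linalg.solve(hess, grad)          # Newton step in the identified coordinates
    x_newton = x.copy()
    x_newton[S] += d
    if np.all(np.sign(x_newton[S]) == s):     # accept only if we stay on the manifold
        x = x_newton

print("final objective:", 0.5 * np.linalg.norm(A @ x - b)**2 + lam * np.abs(x).sum())
```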

    Computational results of a semidefinite branch-and-bound algorithm for k-cluster

    This computational paper presents a method to solve k-cluster problems exactly by intersecting semidefinite and polyhedral relaxations. Our algorithm uses a generic branch-and-bound method featuring an improved semidefinite bounding procedure. Extensive numerical experiments show that this algorithm outperforms the best known methods in both computing time and ability to solve large instances. For the first time, numerical results are reported for k-cluster problems on unstructured graphs with 160 vertices.

    A Distributed Flexible Delay-tolerant Proximal Gradient Algorithm

    We develop and analyze an asynchronous algorithm for distributed convex optimization when the objective writes as a sum of smooth functions, local to each worker, and a non-smooth function. Unlike many existing methods, our distributed algorithm is adjustable to various levels of communication cost, delays, machine computational power, and function smoothness. A unique feature is that the stepsizes depend neither on communication delays nor on the number of machines, which is highly desirable for scalability. We prove that the algorithm converges linearly in the strongly convex case, and provide convergence guarantees in the non-strongly convex case. The obtained rates match those of the vanilla proximal gradient algorithm over an introduced epoch sequence that subsumes the delays of the system. We provide numerical results on large-scale machine learning problems to demonstrate the merits of the proposed method. (To appear in SIAM Journal on Optimization.)
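    For orientation only, here is a synchronous toy sketch, not the paper's asynchronous, delay-tolerant algorithm; the simulated workers, data split, and step size are assumptions. The basic scheme combines worker-local gradients of the smooth terms with a proximal step on the shared non-smooth regularizer.

```python
import numpy as np

# Simulated setting: M workers each hold a local least-squares term;
# the shared non-smooth term is lam * ||x||_1.
rng = np.random.default_rng(5)
M, n = 4, 20
data = [(rng.standard_normal((25, n)), rng.standard_normal(25)) for _ in range(M)]
lam = 0.1

def local_grad(i, x):
    """Gradient of the smooth function held by worker i."""
    Ai, bi = data[i]
    return Ai.T @ (Ai @ x - bi)

L = sum(np.linalg.norm(Ai, 2) ** 2 for Ai, _ in data)   # Lipschitz constant of the sum
step = 1.0 / L

x = np.zeros(n)
for _ in range(300):
    g = sum(local_grad(i, x) for i in range(M))          # workers' contributions (in sync here)
    z = x - step * g
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # proximal step on lam*||.||_1

print("nonzeros in the solution:", int(np.count_nonzero(x)))
```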